Release Notes for Q1 2025

Explore the new features and enhancements in this release!

Updated in: April 2025

Release Version: 1.20

Features and Enhancements
Branding and visual identity refresh – Introducing Calibo Accelerate
  • We are excited to announce that the Lazsa Platform is now Calibo Accelerate, a name that reflects our vision of helping organizations innovate faster and bring ideas to market more efficiently. “Accelerate” embodies our commitment to speed and agility in digital value creation.

  • As part of this release, we are rolling out enhancements to our logo, name, color themes, and key UI elements across the platform to deliver a more modern, cohesive, and user-friendly experience.

  • To ensure a seamless transition, these changes will be introduced in phases across upcoming releases.

  • This rebranding is part of a larger brand refresh, including a new website and updated visuals. Explore the new look at www.calibo.com.

Support for Unity Catalog-enabled Databricks for data integration, data transformation, and data quality jobs

The Calibo Accelerate platform now supports Unity Catalog-enabled Databricks for data integration, data transformation, and data quality jobs. This capability allows seamless ingestion from a variety of supported data sources and enables loading data into multiple lakehouses. It ensures effective data governance through Unity Catalog, enhancing overall data management and security.

See Unity Catalog Integration with Calibo Accelerate platform

The following features are available for Unity Catalog:

  • Supported Data Sources for Data Integration Jobs:

    • FTP

    • SFTP

    • REST API

    • CSV

    • Microsoft Excel

    • Parquet

    • Calibo Ingestion Catalog

    • MySQL

    • MS SQL Server

    • PostgreSQL

    • Oracle

    • Snowflake

  • Data Consistency Through Schema Mapping - You can now map columns in the source table to columns in the target table. By defining the relationship between source and target columns, schema mapping ensures that the data conforms to a specific structure, helping to maintain data consistency for data coming from disparate sources (see the first sketch after this list).

  • Accuracy and Reliability Through Data Constraints - By defining constraints on the columns of the source table, you can prevent errors by enforcing rules during data transformation and processing.

  • Query Performance and Efficiency - Partitioning of data enhances query performance, improves resource utilization, and ensures efficient handling of large datasets.

  • Security and Compliance with Granular Permissions - Assigning granular permissions to Unity Catalog objects such as schemas and tables improves security, strengthens data governance, mitigates risk, and supports compliance with regulatory requirements.

  • External Locations for Secure Data Access - You can now create an external location for Databricks Unity Catalog. This is a data storage location outside the Databricks workspace that is registered with, and managed by, Unity Catalog. The location can be used to read and write data in a governed and secure manner (see the second sketch after this list).

  • Data Ingestion with Databricks Autoloader - Support is now available for using Databricks Autoloader with Amazon S3 or ADLS as the data source when ingesting data into a Databricks lakehouse (see the first sketch after this list).

  • Provisioning Permissions for Unity Catalog Objects - You can now provision permissions for Unity Catalog objects such as schemas and tables from within Calibo Accelerate.

  • Using SQL Warehouse for Databricks Transformation Jobs - You can now select or create a SQL warehouse for Databricks custom transformation jobs, which provides significant benefits in performance, cost efficiency, scalability, and ease of use. Choose a SQL warehouse or a Spark cluster for the transformation job depending on the type of data, the queries involved, and your performance requirements.
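
The following sketch ties several of these capabilities together: Autoloader ingestion, schema mapping, date partitioning, and a data constraint. It is a minimal PySpark illustration, assuming a Databricks notebook on a Unity Catalog-enabled workspace; all paths, catalog, schema, table, and column names are hypothetical.

```python
# Minimal sketch, assuming a Databricks notebook on a Unity Catalog-enabled
# workspace ("spark" is predefined there). All paths and names are hypothetical.
from pyspark.sql import functions as F

# Incrementally ingest new files from cloud storage with Databricks Autoloader.
raw = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", "s3://demo-bucket/_schemas/orders")
    .load("s3://demo-bucket/landing/orders/")
)

# Schema mapping: align source columns with the target table's columns and types.
mapped = raw.select(
    F.col("order_id").cast("bigint").alias("id"),
    F.col("order_ts").cast("timestamp").alias("created_at"),
    F.col("amount").cast("decimal(12,2)").alias("total_amount"),
)

# Write to a Unity Catalog table, partitioned by date for query efficiency.
(
    mapped.withColumn("ingest_date", F.to_date("created_at"))
    .writeStream
    .option("checkpointLocation", "s3://demo-bucket/_checkpoints/orders")
    .partitionBy("ingest_date")
    .trigger(availableNow=True)
    .toTable("demo_catalog.sales.orders")
)

# Data constraint: enforce a rule so violating rows are rejected during processing.
spark.sql(
    "ALTER TABLE demo_catalog.sales.orders "
    "ADD CONSTRAINT positive_amount CHECK (total_amount > 0)"
)
```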

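A second, equally hedged sketch covers the governance pieces: creating an external location and granting granular permissions on Unity Catalog objects, with every statement executed through a SQL warehouse via the databricks-sql-connector package. The hostname, warehouse ID, token, and object names are placeholders.

```python
# Hedged sketch using the databricks-sql-connector package; all identifiers,
# hostnames, and tokens below are hypothetical placeholders.
from databricks import sql

with sql.connect(
    server_hostname="adb-0000000000000000.0.azuredatabricks.net",
    http_path="/sql/1.0/warehouses/0000000000000000",  # the SQL warehouse to use
    access_token="<personal-access-token>",
) as conn:
    with conn.cursor() as cur:
        # External location: storage outside the workspace, governed by Unity Catalog.
        cur.execute(
            "CREATE EXTERNAL LOCATION IF NOT EXISTS demo_landing "
            "URL 's3://demo-bucket/landing/' "
            "WITH (STORAGE CREDENTIAL demo_credential)"
        )
        # Granular permissions on Unity Catalog objects (schema and table).
        cur.execute("GRANT USE SCHEMA ON SCHEMA demo_catalog.sales TO `analysts`")
        cur.execute("GRANT SELECT ON TABLE demo_catalog.sales.orders TO `analysts`")
```
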
Sample code populated for custom algorithm in Jupyter Notebook

Based on the source and target selected for a custom algorithm pipeline, sample code is populated in the Jupyter Notebook. You can use this sample code as a reference when creating your custom code.

See Create Custom Algorithm for a Data Analytics Job
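
As a rough illustration only (the actual generated code depends on the source and target you select), the populated sample might resemble the following scaffold; the paths and function below are hypothetical.

```python
# Hypothetical illustration of a generated scaffold; the real sample code
# depends on the selected source and target. Reading s3:// paths with pandas
# assumes the s3fs and pyarrow packages are installed.
import pandas as pd

def run_algorithm(df: pd.DataFrame) -> pd.DataFrame:
    """Replace this placeholder with your custom algorithm logic."""
    return df

source_df = pd.read_parquet("s3://demo-bucket/input/data.parquet")
result_df = run_algorithm(source_df)
result_df.to_parquet("s3://demo-bucket/output/data.parquet")
```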

Support for data technologies deployed on Azure

Support for templatized and custom data integration and data transformation is now available for technologies deployed on Azure, enabling seamless integration of the Azure ecosystem into Data Pipeline Studio. This includes:

  • ADLS as a source and a target

  • Databricks deployed on an Azure instance used in data integration and data transformation

See Custom Integration with target as Azure Data Lake

See Custom Transformation with Azure Data Lake
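
For orientation, here is a minimal sketch of such a custom transformation, assuming a Databricks cluster on Azure with access to the storage account already configured; the account, container, and column names are hypothetical.

```python
# Minimal sketch, assuming a Databricks notebook on Azure where "spark" is
# predefined and storage credentials are configured; all names are hypothetical.
from pyspark.sql import functions as F

src = "abfss://raw@demoaccount.dfs.core.windows.net/orders/"
dst = "abfss://curated@demoaccount.dfs.core.windows.net/orders/"

df = spark.read.format("parquet").load(src)
out = (
    df.filter(F.col("status") == "COMPLETE")
      .withColumn("processed_at", F.current_timestamp())
)
out.write.mode("overwrite").format("delta").save(dst)
```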

Search and filter options for data crawlers and data catalogs

Search and filter options are now available for data crawlers and data catalogs at both the portfolio and product levels to enhance usability and manageability, leading to more effective data utilization, discovery, and governance.

See Data Crawler and Data Catalog

Support for Application Load Balancer (ALB) for technology deployment on EKS (AWS) and AKS (Microsoft Azure) clusters

In addition to the Nginx Ingress Controller, Calibo Accelerate now supports Application Load Balancer (ALB) for technology deployment on a Kubernetes cluster. This support is available for both EKS (AWS) and AKS (Microsoft Azure) clusters.
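
For context, an ALB-backed deployment on EKS is typically exposed through an Ingress handled by the AWS Load Balancer Controller. The sketch below, using the official kubernetes Python client, shows the EKS case only (AKS uses different controller conventions); the host, service, and namespace names are illustrative, not taken from the platform.

```python
# Hedged sketch of an ALB-backed Ingress on EKS via the official kubernetes
# Python client; assumes the AWS Load Balancer Controller is installed.
# Host, service, and namespace names are hypothetical.
from kubernetes import client, config

config.load_kube_config()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(
        name="demo-app",
        annotations={
            "alb.ingress.kubernetes.io/scheme": "internet-facing",
            "alb.ingress.kubernetes.io/target-type": "ip",
        },
    ),
    spec=client.V1IngressSpec(
        ingress_class_name="alb",  # routes through an Application Load Balancer
        rules=[
            client.V1IngressRule(
                host="demo.example.com",
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/",
                            path_type="Prefix",
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="demo-svc",
                                    port=client.V1ServiceBackendPort(number=80),
                                )
                            ),
                        )
                    ]
                ),
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```
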
Support for creating empty repositories in a source code management tool from the Develop phase

You can now create an empty repository in your source code management tool from the Develop phase of a feature in Calibo Accelerate. An empty repository can be used to store scripts, documentation, configuration files, templates, or other development assets, providing greater flexibility in managing project resources.
AWS resource IDs now displayed with associated names for better clarity in the UI

AWS resources such as VPC IDs, subnet IDs, and AMI IDs are now displayed with their associated names, for example, ami-XXXXXXXXXXXXXXX (custom-ami-name) or subnet-********** (custom-subnet-name), making it easier to identify resources in the Calibo Accelerate UI.
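
Assuming the displayed names are derived from the resources' Name tags (an assumption, not stated above), the same lookup can be reproduced with boto3; the region and subnet ID below are placeholders.

```python
# Hedged sketch: resolve a subnet ID to its Name tag with boto3, mirroring the
# "subnet-xxxx (custom-subnet-name)" display. Region and ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_subnets(SubnetIds=["subnet-0123456789abcdef0"])
for subnet in resp["Subnets"]:
    name = next((t["Value"] for t in subnet.get("Tags", []) if t["Key"] == "Name"), "")
    print(f'{subnet["SubnetId"]} ({name})')
```
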
Breadcrumb navigation added for easier access to products in a portfolio

The breadcrumb trail (Portfolios / [Portfolio Name] / [Product Name]) is now available on the Portfolios screen. When you access a product through the Portfolios screen, breadcrumb navigation helps you understand your location within the platform and makes it easier to switch between products within the same portfolio.
Support for the latest versions of technologies
  • Kubernetes 1.30 Support

    Calibo Accelerate now supports Kubernetes 1.30 for deployments. Applications running on Kubernetes 1.29 will remain operational.

  • Support for SonarQube 10.x

    Calibo Accelerate now supports SonarQube 10.x for code quality scans in a CI/CD pipeline.

  • React 19.0.0 Support

    Calibo Accelerate now supports React 19.0.0 with the following configurations:

    • React 19.0.0

    • React 19.0.0 with TypeScript – Yarn

    • React 19.0.0 without TypeScript

    • React 19.0.0 without TypeScript – Yarn

To view all the supported tools and technologies, see Tools and Technologies Integrated with Calibo Accelerate platform

Links to CI/CD pipeline, Jenkins job, and build log added for deployment error troubleshooting

To improve troubleshooting of technology deployment failures, you can now access direct links to the related CI/CD pipeline, Jenkins job, and build log below the error message. These contextual links provide deeper visibility into the failure and help you quickly identify and resolve issues.
Mandatory upgrade: Calibo Accelerate Orchestrator Agent 1.1.95

A new version (1.1.95) of the Calibo Accelerate Orchestrator Agent is now available. Upgrading to this version is mandatory to ensure a seamless experience. For upgrade steps, see Updating Calibo Accelerate Orchestrator Agent.